It is common sense that datasets with high-quality samples play an important role in artificial intelligence (AI), machine learning (ML), and related studies. However, although AI/ML was introduced into wireless research long ago, few datasets are commonly used in the research community. Without a common dataset, AI-based methods proposed for wireless systems are hard to compare with traditional baselines or even with each other. Existing wireless AI research usually relies on datasets generated from statistical models or from ray-tracing simulations with a limited number of environments. Statistical data hinder the trained AI models from being further fine-tuned for a specific scenario, while ray-tracing data with limited environments reduce the generalization capability of the trained AI models. In this paper, we present the Wireless AI Research Dataset (WAIR-D), which consists of two scenarios. Scenario 1 contains 10,000 environments with sparsely dropped user equipments (UEs), and Scenario 2 contains 100 environments with densely dropped UEs. The environments are randomly selected from real-world maps of more than 40 cities. The large volume of data ensures that trained AI models enjoy good generalization capability, while fine-tuning can easily be carried out on a specific chosen environment. Moreover, both the wireless channels and the corresponding environmental information are provided in WAIR-D, so that extra-information-aided communication mechanisms can be designed and evaluated. WAIR-D provides researchers with benchmarks to compare their designs and to reproduce others' results. In this paper, we show the detailed construction of this dataset and examples of its use.
translated by 谷歌翻译
This paper concerns multiple fundamental frequency (multi-F0) estimation based on pYIN, an algorithm for extracting the fundamental frequency (F0) of monophonic music, and a trained convolutional neural network (CNN) model in which a pitch salience function is produced from the input signal to estimate multiple F0s. The paper discusses the implementation of these two algorithms and their respective advantages and disadvantages. After analyzing the different performance of the two approaches, pYIN is applied to complement the F0s extracted from the trained CNN model, so as to combine the strengths of both algorithms. For evaluation, four pieces performed by two violins were used, and the performance of the models was assessed by the flatness of the extracted F0 curves. The results show that the combined model outperforms the original algorithms when extracting F0s from both monophonic and polyphonic music.
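As a rough illustration of the complementary combination described above, the sketch below fills in frames where a multi-F0 CNN reports no pitch with the pYIN estimate. The fallback rule is an assumption for illustration; the paper's exact combination rule may differ.

```python
import numpy as np

def combine_f0(cnn_f0, pyin_f0):
    """Where the multi-F0 CNN reports no pitch (0 or NaN), fall back to
    the pYIN estimate; otherwise keep the CNN output."""
    cnn_f0 = np.nan_to_num(np.asarray(cnn_f0, dtype=float), nan=0.0)
    pyin_f0 = np.nan_to_num(np.asarray(pyin_f0, dtype=float), nan=0.0)
    return np.where(cnn_f0 > 0, cnn_f0, pyin_f0)

# Toy frame-wise F0 tracks in Hz (0 = unvoiced / no detection).
cnn = [440.0, 0.0, 0.0, 392.0]
pyin = [441.0, 442.0, 0.0, 390.0]
combined = combine_f0(cnn, pyin)
print(combined)  # [440. 442.   0. 392.]
```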
Audio captioning is the task of generating a description of audio based on its content. Due to its high complexity, pre-trained models are widely used in audio captioning. Unless the whole system is retrained, it is hard to determine how much a pre-trained model contributes to an audio captioning system. To avoid the time- and energy-consuming retraining process, it is worth proposing performance predictors for pre-trained models in audio captioning. In this paper, a series of pre-trained models is investigated for the correlation between the extracted audio features and the performance of audio captioning. Two predictors are proposed based on the experimental results. The results show that the kurtosis and skewness of the extracted audio features may serve as performance indicators for audio captioning systems with pre-trained audio encoders, due to the high correlation between these statistics and the performance of the audio captioning system.
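The predictor idea above can be sketched in a few lines: summarize each encoder's extracted features by kurtosis and skewness, then check the rank correlation with the downstream captioning score. The encoder features and scores below are synthetic stand-ins, not data from the paper.

```python
import numpy as np
from scipy.stats import kurtosis, skew, spearmanr

# Hypothetical setup: feature matrices (clips x dims) from five pre-trained
# audio encoders, plus a made-up captioning score for each encoder.
rng = np.random.default_rng(0)
features = [rng.standard_normal((100, 64)) * (i + 1) for i in range(5)]
caption_scores = [0.21, 0.25, 0.24, 0.28, 0.30]

# Predictor statistics: kurtosis / skewness over all feature values.
kurt = [kurtosis(f, axis=None) for f in features]
skewness = [skew(f, axis=None) for f in features]

# Rank correlation with the downstream score is the quality of the predictor.
rho_k, _ = spearmanr(kurt, caption_scores)
rho_s, _ = spearmanr(skewness, caption_scores)
print(f"kurtosis-score correlation: {rho_k:.2f}")
print(f"skewness-score correlation: {rho_s:.2f}")
```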
Keyword spotting (KWS) enables speech-based user interaction on low-power edge devices. Edge devices are usually always-on, so edge computing brings bandwidth savings and privacy protection. These devices typically have limited memory, computing performance, power, and cost, e.g., Cortex-M-based microcontrollers. The challenge is to meet the high-computation and low-latency requirements of deep learning on such devices. This paper first presents our small-footprint KWS system running on an STM32F7 microcontroller with a Cortex-M7 core @216MHz and 512KB of static RAM. Our chosen convolutional neural network (CNN) architecture reduces the number of operations for KWS to satisfy the constraints of edge devices. Our baseline system produces a classification result every 37ms, including the real-time audio feature extraction part. The paper further evaluates the practical performance of different pruning and quantization methods on the microcontroller, including different granularities of sparsity, skipping zero weights, weight-prioritized loops, and SIMD instructions. The results show that, for microcontrollers, accelerating unstructured pruned models is very challenging, and that structured pruning is more friendly than unstructured pruning. The results also verify the performance improvement brought by quantization and SIMD instructions.
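The structured-vs-unstructured distinction the abstract draws can be made concrete with a toy weight matrix: unstructured magnitude pruning zeroes individual weights but leaves the dense layout (and work) unchanged, while structured pruning removes whole rows, so a plain dense or SIMD kernel genuinely shrinks. This is a generic illustration, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.standard_normal((8, 16))  # toy fully-connected weight matrix

# Unstructured magnitude pruning: zero the 50% smallest-magnitude weights.
# The matrix keeps its shape, so a dense matmul still does the same work
# unless the kernel adds per-element zero-skipping logic.
thresh = np.quantile(np.abs(W), 0.5)
W_unstructured = np.where(np.abs(W) >= thresh, W, 0.0)

# Structured pruning: drop the half of the output rows (channels) with the
# smallest L2 norm. The matrix actually shrinks, so a plain dense kernel
# (or SIMD loop) gets a real speedup with no sparse indexing.
norms = np.linalg.norm(W, axis=1)
keep = np.sort(np.argsort(norms)[len(norms) // 2:])
W_structured = W[keep]

print(W_unstructured.shape)  # (8, 16): same shape, half the entries zero
print(W_structured.shape)    # (4, 16): half the rows, still dense
```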
In this paper, the dataset used for the data challenge organized by the Conference on Sound and Music Technology (CSMT) is introduced. The CSMT data challenge requires participants to identify whether a given melody is generated by a computer or composed by a human. The dataset consists of two parts: a development dataset and an evaluation dataset. The development dataset contains only computer-generated melodies, while the evaluation dataset contains both computer-generated and human-composed melodies. The aim of the dataset is to examine whether computer-generated melodies can be distinguished from human-composed ones by learning the characteristics of the generated melodies.
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
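Encoding 3D points into token features, as the abstract describes, typically means mapping coordinates through a positional embedding so that image and LiDAR tokens share a 3D frame. The sinusoidal variant below is a generic sketch of that idea; CMT's actual encoding and dimensions are not specified here.

```python
import numpy as np

def sine_pos_embed(points, dim=16, temperature=100.0):
    """Sinusoidal embedding of 3D coordinates: each of x, y, z is expanded
    into sin/cos features at geometrically spaced frequencies, giving a
    coordinate code that can be added to image or point-cloud tokens."""
    freqs = temperature ** (np.arange(dim // 2) / (dim // 2))
    ang = points[..., None] / freqs                            # (N, 3, dim/2)
    emb = np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)  # (N, 3, dim)
    return emb.reshape(points.shape[0], -1)                    # (N, 3*dim)

# Five random points in a +/-50m scene, embedded to 48-d codes.
pts = np.random.default_rng(0).uniform(-50, 50, size=(5, 3))
emb = sine_pos_embed(pts)
print(emb.shape)  # (5, 48)
```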
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain performance comparable to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility, and the security risks stemming from them have not been explored. This study performs the first backdoor attack against models trained on data distilled by dataset distillation in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, which is where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent them.
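The NAIVEATTACK idea of stamping triggers into the raw data before distillation can be sketched as below. The trigger shape, poison rate, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def add_trigger(images, labels, target_class, trigger_size=3, rate=0.1, seed=0):
    """Stamp a bright square into the bottom-right corner of a fraction of
    the images and relabel them to the attacker's target class -- the kind
    of raw-data poisoning NAIVEATTACK applies before distillation begins."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -trigger_size:, -trigger_size:] = 1.0  # the trigger patch
    labels[idx] = target_class
    return images, labels

# Toy grayscale dataset: 100 blank 28x28 images with cycling labels.
X = np.zeros((100, 28, 28))
y = np.arange(100) % 10
X_poison, y_poison = add_trigger(X, y, target_class=0)
n_triggered = int((X_poison[:, -3:, -3:] == 1.0).all(axis=(1, 2)).sum())
print(n_triggered)  # 10: a 10% poison rate over 100 images
```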
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by fine-tuning, on only hundreds of MIDI files of drum performances, large language models pre-trained on a massive text corpus. We show that by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task, and evaluating drum grooves, with little precedent in the literature, is more challenging still. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
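Fine-tuning a text-pretrained language model on MIDI requires serializing drum performances into token sequences. The encoding below (HIT/WAIT tokens over a quantized grid) is a hypothetical illustration of such a serialization; the paper's actual token format is not specified here.

```python
def groove_to_tokens(events):
    """Convert (time_step, drum) events into a flat text sequence that a
    text language model could be fine-tuned on. Simultaneous hits share a
    time step; gaps become explicit WAIT tokens."""
    tokens = []
    last_t = 0
    for t, drum in sorted(events):
        if t > last_t:
            tokens.append(f"WAIT_{t - last_t}")
            last_t = t
        tokens.append(f"HIT_{drum}")
    return " ".join(tokens)

# A one-bar toy groove on a 16th-note grid (step numbers, drum names).
events = [(0, "KICK"), (0, "HIHAT"), (2, "HIHAT"), (4, "SNARE"), (4, "HIHAT")]
serialized = groove_to_tokens(events)
print(serialized)
```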
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes given only a limited number of support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
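The first insight above, generating a dynamic class center from support masks to re-weight query features, commonly reduces to masked average pooling followed by channel-wise gating. The sketch below shows that generic mechanism; the gating function and shapes are assumptions, not RefT's exact module.

```python
import numpy as np

def masked_class_center(support_feat, support_mask):
    """Masked average pooling: average support features inside the object
    mask to obtain a per-channel dynamic class center."""
    masked = support_feat * support_mask[None]            # (C, H, W)
    return masked.sum(axis=(1, 2)) / max(support_mask.sum(), 1.0)  # (C,)

def reweight_query(query_feat, center):
    """Re-weight query features channel-wise via a sigmoid gate derived
    from the class center (one illustrative choice of weighting)."""
    gate = 1.0 / (1.0 + np.exp(-center))                  # (C,)
    return query_feat * gate[:, None, None]

C, H, W = 4, 8, 8
support = np.random.default_rng(1).standard_normal((C, H, W))
mask = np.zeros((H, W))
mask[2:6, 2:6] = 1.0  # object occupies a 4x4 region of the support image
center = masked_class_center(support, mask)
query = np.ones((C, H, W))
reweighted = reweight_query(query, center)
print(center.shape, reweighted.shape)  # (4,) (4, 8, 8)
```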
Graph Neural Networks (GNNs) have shown satisfactory performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. It is therefore difficult to deploy them on edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle this problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. We then propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and can thus be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
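The teacher-student objective the abstract builds on is the standard distillation loss: a KL divergence between temperature-softened teacher and student distributions. The sketch below shows that base objective only; RELIANT's fairness terms are omitted, and the temperature value is an illustrative choice.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as is conventional for distillation gradients."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean()
    return float(kl * T * T)

teacher = np.array([[4.0, 1.0, -2.0]])
loss_matched = kd_loss(teacher.copy(), teacher)        # student == teacher
loss_uniform = kd_loss(np.zeros((1, 3)), teacher)      # uninformed student
print(loss_matched)       # ~0.0: nothing left to distill
print(loss_uniform > 0)   # True: mismatch is penalized
```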